Codecov Report
```diff
@@            Coverage Diff             @@
##           master      #39      +/-   ##
==========================================
- Coverage   87.06%   85.43%   -1.64%
==========================================
  Files          28       29       +1
  Lines         866      927      +61
==========================================
+ Hits          754      792      +38
- Misses        112      135      +23
```
I'm missing docs, at least for the classes, since the names are not very intuitive :/
I don't like how we allow both a string and loose keyword arguments for its parameters. I think it would be cleaner to allow only the string, which uses the default parameters. And if the user wants to set their own parameters, have them instantiate the callback themselves and pass it:

```python
# unfreezes at epoch 1 by default
trainer.finetune(model, datamodule=datamodule, strategy='freeze_unfreeze')

# if you want to change it
trainer.finetune(model, datamodule=datamodule, strategy=FreezeUnfreeze(unfreeze_epoch=50))
```
Sounds fine to me!
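For readers following along, here is a minimal sketch of the dispatch being proposed, assuming a registry that maps strategy names to callback classes built with their defaults. `resolve_strategy`, `_FINETUNING_STRATEGIES`, and this `FreezeUnfreeze` stub are hypothetical illustrations, not the PR's actual code:

```python
from typing import Union


class FreezeUnfreeze:
    """Hypothetical stand-in: keeps the backbone frozen until ``unfreeze_epoch``."""

    def __init__(self, unfreeze_epoch: int = 1):
        self.unfreeze_epoch = unfreeze_epoch


# String shortcuts resolve to callbacks built with their default parameters.
_FINETUNING_STRATEGIES = {
    "freeze_unfreeze": FreezeUnfreeze,
}


def resolve_strategy(strategy: Union[str, FreezeUnfreeze]) -> FreezeUnfreeze:
    """Turn the ``strategy`` argument of ``finetune`` into a callback instance."""
    if isinstance(strategy, str):
        # A plain string selects a registered callback with default parameters.
        return _FINETUNING_STRATEGIES[strategy]()
    # An instance is used as-is, carrying the user's own parameters.
    return strategy
```

With this shape, `strategy='freeze_unfreeze'` and `strategy=FreezeUnfreeze(unfreeze_epoch=50)` both end up as a single callback object, which keeps the `finetune` signature free of strategy-specific keyword arguments.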
Co-authored-by: Carlos Mocholí <[email protected]>
What does this PR do?
This PR makes fine-tuning parametrisation more flexible. If we agree on the API, I will add it to the docs.
Fixes # (issue)
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃